Opera exploration server

Warning: this site is under development.
Warning: this site is generated automatically from raw corpora.
The information has therefore not been validated.

Facial expressions of emotion in speech and singing

Internal identifier: 000404 (PascalFrancis/Checkpoint); previous: 000403; next: 000405


Authors: Nicole Scotto Di Carlo [France]; Isabelle Guaïtella [France]

Source: Semiotica, 2004, Vol. 149, No. 1-4, pp. 37-55

RBID: Francis:524-05-11634

French descriptors: Sémiotique; Communication; Emotion; Voix chantée; Perception visuelle; Perception auditive; Etude expérimentale; Expression faciale

English descriptors: Auditory perception; Communication; Emotion; Experimental study; Semiotics; Singing voice; Visual perception

Abstract

An experiment dealing with the recognition of emotions in speech and in singing for two subject populations (opera amateurs and non-amateurs) was conducted to assess the respective roles of a professional singer's voice and facial expressions in the perception of emotions, and to determine how the audience decodes those emotions. The results of visual, auditory, and audiovisual perception tests showed that emotions expressed during speech were identified better than those expressed during singing. While facial expressions appear to play an important role, the voice proved to be a relatively poor conveyor of emotional information. When voice and facial expression were combined, the recognition of emotions improved slightly for speech but not for singing, where the acoustic cues of emotions are partially destroyed and therefore do not provide any additional information.



The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en" level="a">Facial expressions of emotion in speech and singing</title>
<author>
<name sortKey="Di Carlo, Nicole Scotto" sort="Di Carlo, Nicole Scotto" uniqKey="Di Carlo N" first="Nicole Scotto" last="Di Carlo">Nicole Scotto Di Carlo</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>French National Research Center (CNRS)</s1>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>France</country>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Guaitella, Isabelle" sort="Guaitella, Isabelle" uniqKey="Guaitella I" first="Isabelle" last="Guaïtella">Isabelle Guaïtella</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>French National Research Center (CNRS)</s1>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>France</country>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">INIST</idno>
<idno type="inist">524-05-11634</idno>
<date when="2004">2004</date>
<idno type="stanalyst">FRANCIS 524-05-11634 INIST</idno>
<idno type="RBID">Francis:524-05-11634</idno>
<idno type="wicri:Area/PascalFrancis/Corpus">000417</idno>
<idno type="wicri:Area/PascalFrancis/Curation">000838</idno>
<idno type="wicri:Area/PascalFrancis/Checkpoint">000404</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en" level="a">Facial expressions of emotion in speech and singing</title>
<author>
<name sortKey="Di Carlo, Nicole Scotto" sort="Di Carlo, Nicole Scotto" uniqKey="Di Carlo N" first="Nicole Scotto" last="Di Carlo">Nicole Scotto Di Carlo</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>French National Research Center (CNRS)</s1>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>France</country>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
</affiliation>
</author>
<author>
<name sortKey="Guaitella, Isabelle" sort="Guaitella, Isabelle" uniqKey="Guaitella I" first="Isabelle" last="Guaïtella">Isabelle Guaïtella</name>
<affiliation wicri:level="1">
<inist:fA14 i1="01">
<s1>French National Research Center (CNRS)</s1>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</inist:fA14>
<country>France</country>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
<wicri:noRegion>French National Research Center (CNRS)</wicri:noRegion>
</affiliation>
</author>
</analytic>
<series>
<title level="j" type="main">Semiotica</title>
<title level="j" type="abbreviated">Semiotica</title>
<idno type="ISSN">0037-1998</idno>
<imprint>
<date when="2004">2004</date>
</imprint>
</series>
</biblStruct>
</sourceDesc>
<seriesStmt>
<title level="j" type="main">Semiotica</title>
<title level="j" type="abbreviated">Semiotica</title>
<idno type="ISSN">0037-1998</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="KwdEn" xml:lang="en">
<term>Auditory perception</term>
<term>Communication</term>
<term>Emotion</term>
<term>Experimental study</term>
<term>Semiotics</term>
<term>Singing voice</term>
<term>Visual perception</term>
</keywords>
<keywords scheme="Pascal" xml:lang="fr">
<term>Sémiotique</term>
<term>Communication</term>
<term>Emotion</term>
<term>Voix chantée</term>
<term>Perception visuelle</term>
<term>Perception auditive</term>
<term>Etude expérimentale</term>
<term>Expression faciale</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">An experiment dealing with the recognition of emotions in speech and in singing for two subject populations (opera amateurs and non-amateurs) was conducted to assess the respective roles of a professional singer's voice and facial expressions in the perception of emotions, and to determine how the audience decodes those emotions. The results of visual, auditory, and audiovisual perception tests showed that emotions expressed during speech were identified better than those expressed during singing. While facial expressions appear to play an important role, the voice proved to be a relatively poor conveyor of emotional information. When voice and facial expression were combined, the recognition of emotions improved slightly for speech but not for singing, where the acoustic cues of emotions are partially destroyed and therefore do not provide any additional information</div>
</front>
</TEI>
<inist>
<standard h6="B">
<pA>
<fA01 i1="01" i2="1">
<s0>0037-1998</s0>
</fA01>
<fA03 i2="1">
<s0>Semiotica</s0>
</fA03>
<fA05>
<s2>149</s2>
</fA05>
<fA06>
<s2>1-4</s2>
</fA06>
<fA08 i1="01" i2="1" l="ENG">
<s1>Facial expressions of emotion in speech and singing</s1>
</fA08>
<fA11 i1="01" i2="1">
<s1>DI CARLO (Nicole Scotto)</s1>
</fA11>
<fA11 i1="02" i2="1">
<s1>GUAÏTELLA (Isabelle)</s1>
</fA11>
<fA14 i1="01">
<s1>French National Research Center (CNRS)</s1>
<s3>FRA</s3>
<sZ>1 aut.</sZ>
<sZ>2 aut.</sZ>
</fA14>
<fA20>
<s1>37-55</s1>
</fA20>
<fA21>
<s1>2004</s1>
</fA21>
<fA23 i1="01">
<s0>ENG</s0>
</fA23>
<fA43 i1="01">
<s1>INIST</s1>
<s2>15399</s2>
<s5>354000110515060020</s5>
</fA43>
<fA44>
<s0>0000</s0>
<s1>© 2005 INIST-CNRS. All rights reserved.</s1>
</fA44>
<fA45>
<s0>17 ref.</s0>
</fA45>
<fA47 i1="01" i2="1">
<s0>524-05-11634</s0>
</fA47>
<fA60>
<s1>P</s1>
</fA60>
<fA61>
<s0>A</s0>
</fA61>
<fA64 i1="01" i2="1">
<s0>Semiotica</s0>
</fA64>
<fA66 i1="01">
<s0>DEU</s0>
</fA66>
<fA68 i1="01" i2="1" l="FRE">
<s1>L'expression faciale des émotions dans la parole et le chant</s1>
</fA68>
<fC01 i1="01" l="ENG">
<s0>An experiment dealing with the recognition of emotions in speech and in singing for two subject populations (opera amateurs and non-amateurs) was conducted to assess the respective roles of a professional singer's voice and facial expressions in the perception of emotions, and to determine how the audience decodes those emotions. The results of visual, auditory, and audiovisual perception tests showed that emotions expressed during speech were identified better than those expressed during singing. While facial expressions appear to play an important role, the voice proved to be a relatively poor conveyor of emotional information. When voice and facial expression were combined, the recognition of emotions improved slightly for speech but not for singing, where the acoustic cues of emotions are partially destroyed and therefore do not provide any additional information</s0>
</fC01>
<fC02 i1="01" i2="L">
<s0>52461</s0>
<s1>XIII</s1>
</fC02>
<fC02 i1="02" i2="L">
<s0>524</s0>
</fC02>
<fC03 i1="01" i2="L" l="FRE">
<s0>Sémiotique</s0>
<s2>NI</s2>
<s5>01</s5>
</fC03>
<fC03 i1="01" i2="L" l="ENG">
<s0>Semiotics</s0>
<s2>NI</s2>
<s5>01</s5>
</fC03>
<fC03 i1="02" i2="L" l="FRE">
<s0>Communication</s0>
<s2>NI</s2>
<s5>02</s5>
</fC03>
<fC03 i1="02" i2="L" l="ENG">
<s0>Communication</s0>
<s2>NI</s2>
<s5>02</s5>
</fC03>
<fC03 i1="03" i2="L" l="FRE">
<s0>Emotion</s0>
<s2>NI</s2>
<s5>03</s5>
</fC03>
<fC03 i1="03" i2="L" l="ENG">
<s0>Emotion</s0>
<s2>NI</s2>
<s5>03</s5>
</fC03>
<fC03 i1="04" i2="L" l="FRE">
<s0>Voix chantée</s0>
<s2>NI</s2>
<s5>04</s5>
</fC03>
<fC03 i1="04" i2="L" l="ENG">
<s0>Singing voice</s0>
<s2>NI</s2>
<s5>04</s5>
</fC03>
<fC03 i1="05" i2="L" l="FRE">
<s0>Perception visuelle</s0>
<s2>NI</s2>
<s5>05</s5>
</fC03>
<fC03 i1="05" i2="L" l="ENG">
<s0>Visual perception</s0>
<s2>NI</s2>
<s5>05</s5>
</fC03>
<fC03 i1="06" i2="L" l="FRE">
<s0>Perception auditive</s0>
<s2>NI</s2>
<s5>06</s5>
</fC03>
<fC03 i1="06" i2="L" l="ENG">
<s0>Auditory perception</s0>
<s2>NI</s2>
<s5>06</s5>
</fC03>
<fC03 i1="07" i2="L" l="FRE">
<s0>Etude expérimentale</s0>
<s2>NI</s2>
<s5>07</s5>
</fC03>
<fC03 i1="07" i2="L" l="ENG">
<s0>Experimental study</s0>
<s2>NI</s2>
<s5>07</s5>
</fC03>
<fC03 i1="08" i2="L" l="FRE">
<s0>Expression faciale</s0>
<s4>INC</s4>
<s5>31</s5>
</fC03>
<fN21>
<s1>248</s1>
</fN21>
</pA>
</standard>
</inist>
<affiliations>
<list>
<country>
<li>France</li>
</country>
</list>
<tree>
<country name="France">
<noRegion>
<name sortKey="Di Carlo, Nicole Scotto" sort="Di Carlo, Nicole Scotto" uniqKey="Di Carlo N" first="Nicole Scotto" last="Di Carlo">Nicole Scotto Di Carlo</name>
</noRegion>
<name sortKey="Guaitella, Isabelle" sort="Guaitella, Isabelle" uniqKey="Guaitella I" first="Isabelle" last="Guaïtella">Isabelle Guaïtella</name>
</country>
</tree>
</affiliations>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/PascalFrancis/Checkpoint
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000404 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/PascalFrancis/Checkpoint/biblio.hfd -nk 000404 | SxmlIndent | more
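
As an illustration (the grep filter is an addition, not part of Dilib), the output of the same pipeline can be narrowed with standard Unix tools, for example to keep only the descriptor terms of the TEI header shown above; this assumes EXPLOR_STEP is set as in the first command:

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000404 | SxmlIndent | grep '<term>'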

To add a link to this page within the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    PascalFrancis
   |étape=   Checkpoint
   |type=    RBID
   |clé=     Francis:524-05-11634
   |texte=   Facial expressions of emotion in speech and singing
}}

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024